Three-Class Text Sentiment Analysis Based on LSTM
Sentiment analysis is a crucial task in natural language processing (NLP) with applications in public opinion monitoring, market research, and beyond. This paper introduces a three-class sentiment classification method for Weibo comments using Long Short-Term Memory (LSTM) networks to discern positive, neutral, and negative sentiments. LSTM, as a deep learning model, excels at capturing long-distance dependencies in text data, providing significant advantages over traditional machine learning approaches. Through preprocessing and feature extraction from Weibo comment texts, our LSTM model achieves precise sentiment prediction. Experimental results demonstrate superior performance, achieving an accuracy of 98.31% and an F1 score of 98.28%, notably outperforming conventional models and other deep learning methods. This underscores the effectiveness of LSTM in capturing nuanced sentiment information within text, thereby enhancing classification accuracy. Despite its strengths, the LSTM model faces challenges such as high computational complexity and slower processing times for lengthy texts. Moreover, complex emotional expressions like sarcasm and humor pose additional difficulties. Future work could explore combining pre-trained models or advancing feature engineering techniques to further improve both accuracy and practicality. Overall, this study provides an effective solution for sentiment analysis on Weibo comments.
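The abstract above describes an LSTM that reads a comment token by token and classifies it into three sentiment classes. A minimal sketch of one LSTM step followed by a linear classification layer is shown below; all dimensions, initialisations, and the random "embeddings" are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; gate pre-activations stacked as [input, forget, output, candidate]."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c = f * c_prev + i * g          # cell state carries long-distance information
    h = o * np.tanh(c)              # hidden state exposed to the classifier
    return h, c

rng = np.random.default_rng(0)
embed_dim, hidden_dim, num_classes = 16, 8, 3    # illustrative sizes
W = rng.normal(0, 0.1, (4 * hidden_dim, embed_dim))
U = rng.normal(0, 0.1, (4 * hidden_dim, hidden_dim))
b = np.zeros(4 * hidden_dim)
V = rng.normal(0, 0.1, (num_classes, hidden_dim))  # final classification layer

# Run one dummy 20-token comment (random vectors stand in for word embeddings)
h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
for x in rng.normal(0, 1, (20, embed_dim)):
    h, c = lstm_step(x, h, c, W, U, b)
label = int(np.argmax(V @ h))       # 0 = negative, 1 = neutral, 2 = positive
```

The forget gate `f` is what lets the final hidden state retain sentiment cues from early in a long comment, which is the advantage over bag-of-words baselines the abstract points to.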
Implicit Sentiment Analysis Based on Chain of Thought Prompting
Implicit Sentiment Analysis (ISA) is a crucial research area in natural language processing. Inspired by the Chain of Thought (CoT) idea from large language models, this paper introduces a Sentiment Analysis of Thinking (SAoT) framework. The framework first analyzes the implicit aspects and opinions in the text using common sense and chain-of-thought reasoning. It then reflects on the process of implicit sentiment analysis and finally deduces the sentiment polarity. The model is evaluated on the SemEval 2014 dataset, consisting of 1,120 restaurant reviews and 638 laptop reviews. The experimental results demonstrate that the ERNIE-Bot-4+SAoT model yields a notable performance improvement. On the restaurant dataset, the F1 score reaches 75.27, accompanied by an ISA score of 66.29; on the laptop dataset, the F1 score reaches 76.50 and the ISA score 73.46. The ERNIE-Bot-4+SAoT model surpasses the BERTAsp + SCAPt baseline by an average margin of 47.99%.
- Asia > China > Shanghai > Shanghai (0.05)
- North America > United States > California (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
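The three-stage SAoT procedure described above (identify implicit aspect, infer opinion, reflect and output polarity) can be sketched as a prompt template. The wording below is a hypothetical illustration of the pattern, not the paper's actual template.

```python
def build_saot_prompt(review: str) -> str:
    """Assemble a three-step chain-of-thought prompt in the spirit of SAoT.

    The step wording is illustrative; the original paper's template may differ.
    """
    steps = [
        "Step 1: Identify the implicit aspect the review comments on.",
        "Step 2: Infer the underlying opinion about that aspect, using common sense.",
        "Step 3: Reflect on steps 1-2, then output the sentiment polarity "
        "(positive, neutral, or negative).",
    ]
    return f'Review: "{review}"\n' + "\n".join(steps)

prompt = build_saot_prompt("The waiter gave us a table by the kitchen door again.")
print(prompt)
```

The example review carries no explicit opinion word, which is exactly the implicit-sentiment case where a direct polarity classifier struggles and staged reasoning helps.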
FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications
Konstantinidis, Thanos, Iacovides, Giorgos, Xu, Mingxue, Constantinides, Tony G., Mandic, Danilo
There are multiple sources of financial news online which influence market movements and traders' decisions. This highlights the need for accurate sentiment analysis, in addition to appropriate algorithmic trading techniques, to arrive at better-informed trading decisions. Standard lexicon-based sentiment approaches have demonstrated their power in aiding financial decisions, but they are known to suffer from issues related to context sensitivity and word ordering. Large Language Models (LLMs) can also be used in this context, but they are not finance-specific and tend to require significant computational resources. To facilitate a finance-specific LLM framework, we introduce a novel approach based on the Llama 2 7B foundational model, in order to benefit from its generative nature and comprehensive language manipulation. This is achieved by fine-tuning the Llama 2 7B model on a small portion of supervised financial sentiment analysis data, so as to jointly handle the complexities of financial lexicon and context, and by further equipping it with a neural-network-based decision mechanism. Such a generator-classifier scheme, referred to as FinLlama, is trained not only to classify sentiment valence but also to quantify its strength, thus offering traders nuanced insight into financial news articles. Complementing this, parameter-efficient fine-tuning through LoRA optimises the trainable parameters, thus minimising computational and memory requirements without sacrificing accuracy. Simulation results demonstrate the ability of the proposed FinLlama to provide a framework for enhanced portfolio management decisions and increased market returns. These results underpin the ability of FinLlama to construct high-return portfolios which exhibit enhanced resilience, even during volatile periods and unpredictable market events.
- North America > United States > New York (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (0.34)
- Research Report > Promising Solution (0.34)
- Overview > Innovation (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
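The parameter-efficiency claim for LoRA in the FinLlama abstract comes from replacing a full weight update with a low-rank product: instead of training a d×k update ΔW, LoRA trains two small factors B (d×r) and A (r×k) with rank r ≪ d, k. A back-of-envelope sketch, with layer sizes and rank chosen purely for illustration (not FinLlama's actual configuration):

```python
import numpy as np

d, k, r = 1024, 1024, 8            # hypothetical layer shape and LoRA rank

full_update_params = d * k         # parameters if Delta W were trained directly
lora_params = d * r + r * k        # parameters for the low-rank factors B and A

W = np.random.randn(d, k) * 0.01   # frozen pretrained weight (stays untrained)
B = np.zeros((d, r))               # B starts at zero so B @ A is initially zero
A = np.random.randn(r, k) * 0.01
W_eff = W + B @ A                  # effective weight applied at inference

print(lora_params, full_update_params)
```

Here the trainable parameter count drops from 1,048,576 to 16,384, roughly 1.6% of the full update, which is the source of the reduced memory and compute requirements the abstract mentions.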
Araujo
Sentiment analysis has become a hot topic, especially given the volume of opinions available in social media data. With the increasing interest in this theme, several methods have been proposed in the literature. Recent efforts have shown that no single method always achieves the best prediction performance across different datasets. Additionally, novel methods have not been extensively compared with other methods and across different datasets, especially methods not designed for the English language. Consequently, researchers tend to accept any popular method as a valid methodology to measure sentiments, a practice that is unusual in science. In this context, we propose iFeel 2.0, an online web system that implements 19 sentence-level sentiment analysis methods and allows users to easily label a dataset with all of them.
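When several sentence-level methods label the same dataset, as iFeel 2.0 enables, a common way to combine their outputs is a majority vote. The sketch below is a generic illustration of that idea, not part of iFeel itself; the tie-breaking rule to "neutral" is an assumption.

```python
from collections import Counter

def majority_label(labels):
    """Combine per-method sentiment labels by majority vote.

    Ties between the top labels resolve to 'neutral' (an illustrative choice).
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "neutral"
    return counts[0][0]

# Hypothetical outputs of three sentence-level methods for one sentence
print(majority_label(["positive", "positive", "negative"]))
```

Because no single method wins on every dataset, aggregating several of them is one pragmatic response to the comparison problem the abstract raises.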
Mental Health Alerts via Facebook? - The Crux
Every day, 730,000 comments and 420 million statuses are posted on Facebook, 500 million 140-character tweets are posted, and 430,000 hours of new video are uploaded to YouTube. The Internet is a goldmine of data just waiting to be analyzed. Ever since social media crept deeper and deeper into our daily lives, governments and advertisers have been utilizing this data for myriad purposes. Now, a team of researchers at the University of Ottawa, the University of Alberta, and the Université de Montpellier in France is examining ways to use social media data to detect and monitor people who are potentially at risk of mental health issues. Using computer algorithms, the team will apply social web mining and "sentiment analysis methods" to troves of data generated through social media to detect at-risk individuals. Sentiment analysis is the process of identifying and categorizing opinions expressed in text through a computer program.
- North America > Canada > Alberta (0.56)
- Europe > France > Occitanie > Hérault > Montpellier (0.25)
- Asia > Singapore (0.05)
- Asia > China > Jiangsu Province > Nanjing (0.05)
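The article's closing definition, identifying and categorizing opinions expressed in text, is often introduced with a simple lexicon-based scorer: each opinion word carries a signed weight and the signs are summed. The toy lexicon below is made up for illustration; real systems rely on curated resources and far more sophisticated models.

```python
# Tiny hypothetical lexicon; weights are illustrative only
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}

def sentiment(text: str) -> str:
    """Classify text by summing signed lexicon weights of its words."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I feel sad and awful today"))
```

Persistent negative scores across a user's posts are the kind of signal a monitoring system like the one described might flag, though in practice such decisions need much stronger evidence than word counts.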
Limits of Electoral Predictions Using Twitter
Gayo-Avello, Daniel (Universidad de Oviedo) | Metaxas, Panagiotis Takis (Wellesley College) | Mustafaraj, Eni (Wellesley College)
Using social media for political discourse is becoming common practice, especially around election time. One interesting aspect of this trend is the possibility of pulsing the public's opinion about the elections, and that has attracted the interest of many researchers and the press. Allegedly, predicting electoral outcomes from social media data can be feasible and even simple. Positive results have been reported, but without an analysis of the principle that enables them. Our work puts the purported predictive power of social media metrics to the test against the 2010 US congressional elections: we applied techniques that had reportedly led to positive election predictions in the past to the Twitter data collected around those elections. Unfortunately, we find no correlation between the analysis results and the electoral outcomes, contradicting previous reports. Observing that 80 years of polling research would support our findings, we argue that one should not accept predictions about events using social media data as a black box. Instead, scholarly research should be accompanied by a model explaining the predictive power of social media, when there is one.
- North America > United States > Massachusetts (0.05)
- North America > United States > Texas (0.04)
- North America > United States > Indiana (0.04)
- Europe > Germany (0.04)
- Research Report > Experimental Study (0.34)
- Research Report > New Finding (0.34)
- Information Technology > Services (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
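The "no correlation" finding above is typically checked with a Pearson correlation between per-race social-media-derived vote shares and the actual results. A self-contained sketch follows; the two data series are invented solely to show the computation, not figures from the study.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-race Twitter-derived shares vs. actual vote shares
predicted = [0.62, 0.48, 0.55, 0.51, 0.40]
actual    = [0.45, 0.58, 0.49, 0.60, 0.52]
r = pearson(predicted, actual)
print(round(r, 2))
```

A coefficient near zero (or negative, as in this made-up sample) is the pattern the authors report: the Twitter metrics carried no usable signal about the electoral outcomes.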